onboard camera
VROOM - Visual Reconstruction over Onboard Multiview
Yadav, Yajat, Bharadwaj, Varun, Korrapati, Jathin, Baranwal, Tanish
We introduce VROOM, a system for reconstructing 3D models of Formula 1 circuits using only onboard camera footage from racecars. Leveraging video data from the 2023 Monaco Grand Prix, we address video challenges such as high-speed motion and sharp cuts between camera frames. Our pipeline evaluates methods such as DROID-SLAM, AnyCam, and MonST3R, and combines preprocessing techniques such as masking, temporal chunking, and resolution scaling to account for dynamic motion and computational constraints. We show that VROOM is able to partially recover track and vehicle trajectories in complex environments. These findings indicate the feasibility of using onboard video for scalable 4D reconstruction in real-world settings.
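A minimal sketch of the temporal-chunking and masking preprocessing described above, assuming OpenCV, a crude frame-difference cut detector, and a fixed lower-image mask for the car body and HUD; the threshold, scale factor, and mask region are illustrative guesses, not VROOM's actual parameters:

```python
import cv2

def chunk_and_mask(video_path, scale=0.5, cut_thresh=40.0):
    """Split onboard footage into temporally coherent chunks for SLAM.

    A new chunk starts whenever the mean absolute frame difference
    exceeds cut_thresh (a crude broadcast-cut detector). Frames are
    downscaled, and the lower quarter of the image (assumed to show
    the car body / HUD overlay) is blacked out.
    """
    cap = cv2.VideoCapture(video_path)
    chunks, current, prev_gray = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, None, fx=scale, fy=scale)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None and cv2.absdiff(gray, prev_gray).mean() > cut_thresh:
            if current:
                chunks.append(current)  # hard cut: close the chunk
            current = []
        prev_gray = gray
        frame[int(0.75 * frame.shape[0]):, :] = 0  # assumed car-body mask
        current.append(frame)
    if current:
        chunks.append(current)
    cap.release()
    return chunks
```

Each chunk can then be fed to a reconstruction backend such as DROID-SLAM or MonST3R independently, sidestepping the broadcast cuts.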
Model Predictive Control For Multiple Castaway Tracking with an Autonomous Aerial Agent
Anastasiou, Andreas, Papaioannou, Savvas, Kolios, Panayiotis, Panayiotou, Christos G.
Over the past few years, a plethora of advancements in Unmanned Aerial Vehicle (UAV) technology has paved the way for UAV-based search and rescue operations with transformative impact on the outcome of critical life-saving missions. This paper dives into the challenging task of multiple castaway tracking using an autonomous UAV agent. Leveraging the computing power of modern embedded devices, we propose a Model Predictive Control (MPC) framework for tracking multiple castaways assumed to drift afloat in the aftermath of a maritime accident. We consider a stationary radar sensor that is responsible for signaling the search mission by providing noisy measurements of each castaway's initial state. The UAV agent aims to detect and track the moving targets with its onboard camera sensor, which has a limited sensing range. In this work, we also experimentally determine the probability of target detection from real-world data by training and evaluating various Convolutional Neural Networks (CNNs). Extensive qualitative and quantitative evaluations demonstrate the performance of the proposed approach.
- Europe > Middle East > Cyprus (0.29)
- Asia (0.14)
- Aerospace & Defense (0.68)
- Energy > Oil & Gas > Upstream (0.61)
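The abstract does not reproduce the MPC formulation, so the following is only a toy receding-horizon sketch under strong assumptions: single-integrator UAV dynamics, a single castaway with a known constant drift, and a plain quadratic cost solved with SciPy rather than an embedded QP solver.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(uav_pos, castaway_pos, drift, horizon=10, dt=0.5, v_max=5.0):
    """One receding-horizon step: pick planar UAV velocities that keep
    the agent close to the castaway's predicted drift trajectory."""
    # Predicted castaway positions under an assumed constant-drift model.
    steps = np.arange(1, horizon + 1)[:, None]
    preds = np.asarray(castaway_pos, dtype=float) + np.asarray(drift, dtype=float) * dt * steps

    def cost(u_flat):
        u = u_flat.reshape(horizon, 2)
        pos, total = np.asarray(uav_pos, dtype=float), 0.0
        for k in range(horizon):
            pos = pos + u[k] * dt                    # single-integrator model
            total += np.sum((pos - preds[k]) ** 2)   # tracking error
            total += 0.1 * np.sum(u[k] ** 2)         # control effort
        return total

    bounds = [(-v_max, v_max)] * (2 * horizon)
    res = minimize(cost, np.zeros(2 * horizon), bounds=bounds)
    return res.x[:2]  # apply only the first control, then re-plan
```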
Distributed Perception Aware Safe Leader Follower System via Control Barrier Methods
Suganda, Richie R., Tran, Tony, Pan, Miao, Fan, Lei, Lin, Qin, Hu, Bin
This paper addresses a distributed leader-follower formation control problem for a group of agents, each using a body-fixed camera with a limited field of view (FOV) for state estimation. The main challenge arises from the need to coordinate the agents' movements with their cameras' FOV to maintain visibility of the leader for accurate and reliable state estimation. To address this challenge, we propose a novel perception-aware distributed leader-follower safe control scheme that incorporates FOV limits as state constraints. A Control Barrier Function (CBF)-based quadratic program is employed to ensure the forward invariance of a safety set defined by these constraints. Furthermore, new neural-network-based and double-bounding-box-based estimators, combined with temporal filters, are developed to estimate system states directly from real-time image data, providing consistent performance across various environments. Comparison results in the Gazebo simulator demonstrate the effectiveness and robustness of the proposed framework in two distinct environments.
- North America > United States > Texas > Harris County > Houston (0.15)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
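To make the CBF idea concrete, here is a heavily reduced safety filter that regulates only a follower's yaw rate so the leader's bearing stays inside the camera's half field of view. The paper's scheme handles full formation states and ties into the learned estimators, so everything below is an illustrative scalar special case:

```python
import numpy as np

def cbf_yaw_filter(theta, beta, omega_nom, half_fov=0.6, gamma=2.0):
    """Scalar CBF-QP: min (omega - omega_nom)^2 s.t. dh/dt + gamma*h >= 0,
    with h = half_fov^2 - e^2 and bearing error e = beta - theta.
    For a pure-rotation follower, dh/dt = 2*e*omega, so the constraint
    is linear in omega and the QP has a closed-form solution."""
    e = np.arctan2(np.sin(beta - theta), np.cos(beta - theta))  # wrapped error
    h = half_fov ** 2 - e ** 2
    a = 2.0 * e  # constraint: a * omega + gamma * h >= 0
    if a * omega_nom + gamma * h >= 0 or abs(a) < 1e-9:
        return omega_nom          # nominal command is already safe
    return -gamma * h / a         # project onto the constraint boundary
```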
An ARGoS plug-in for the Crazyflie drone
Stolfi, Daniel H., Danoy, Grégoire
We present a new plug-in for the ARGoS swarm robotics simulator to implement the Crazyflie drone, including its controllers, sensors, and some expansion decks. We have based our development on the former Spiri drone, upgrading the position controller and adding a new speed controller, LED ring, onboard camera, and battery discharge model. We have compared this new plug-in in terms of accuracy and efficiency with data obtained from real Crazyflie drones. All our experiments showed that the proposed plug-in worked well, presenting high levels of accuracy. We believe that this is an important contribution to robot simulation, extending ARGoS's capabilities through the use of our proposed, open-source plug-in.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Switzerland (0.04)
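The abstract mentions a new battery discharge model but gives no equations; the sketch below is a guessed linear model, not the plug-in's actual C++ implementation. The 250 mAh capacity matches the stock Crazyflie 2.x pack, while the idle and full-thrust currents are assumptions:

```python
class BatteryModel:
    """Illustrative linear discharge model for a simulated Crazyflie."""

    def __init__(self, capacity_mah=250.0, idle_ma=120.0, full_ma=900.0):
        self.capacity_mah = capacity_mah
        self.idle_ma = idle_ma        # assumed draw while idle
        self.full_ma = full_ma        # assumed draw at full thrust
        self.remaining_mah = capacity_mah

    def step(self, dt_s, thrust=0.0):
        """Advance by dt_s seconds; thrust in [0, 1] interpolates the
        current draw between idle and full thrust. Returns the state
        of charge as a fraction in [0, 1]."""
        draw_ma = self.idle_ma + thrust * (self.full_ma - self.idle_ma)
        self.remaining_mah = max(self.remaining_mah - draw_ma * dt_s / 3600.0, 0.0)
        return self.remaining_mah / self.capacity_mah
```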
Vision-Based Autonomous Navigation for Unmanned Surface Vessel in Extreme Marine Conditions
Ahmed, Muhayyuddin, Bakht, Ahsan Baidar, Hassan, Taimur, Akram, Waseem, Humais, Ahmed, Seneviratne, Lakmal, He, Shaoming, Lin, Defu, Hussain, Irfan
Visual perception is an important component for the autonomous navigation of unmanned surface vessels (USVs), particularly for tasks related to autonomous inspection and tracking. These tasks involve vision-based navigation techniques to identify the target for navigation. Reduced visibility under extreme weather conditions in marine environments makes it difficult for vision-based approaches to work properly. To overcome these issues, this paper presents an autonomous vision-based navigation framework for tracking target objects in extreme marine conditions. The proposed framework consists of an integrated perception pipeline that uses a generative adversarial network (GAN) to remove noise and highlight the object features before passing them to the object detector (i.e., YOLOv5). The detected visual features are then used by the USV to track the target. The proposed framework has been thoroughly tested in simulation under extremely reduced visibility due to sandstorms and fog. The results are compared with state-of-the-art de-hazing methods on the benchmark MBZIRC simulation dataset, on which the proposed scheme outperforms the existing methods across various metrics.
- North America > United States (0.14)
- Asia > Middle East > UAE (0.14)
- Europe > Norway > Norwegian Sea (0.04)
- (2 more...)
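A rough sketch of the two-stage perception pipeline (restoration, then detection). The torch.hub entry point for YOLOv5 is real; the dehaze_gan callable is only a placeholder for the paper's GAN, whose architecture is not specified above:

```python
import cv2
import torch

# Load the off-the-shelf YOLOv5 detector named in the abstract.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def locate_target(frame_bgr, dehaze_gan=None):
    """Restore visibility, then return the best detection's centroid
    (pixel coordinates) for the USV tracking controller, or None."""
    if dehaze_gan is not None:
        frame_bgr = dehaze_gan(frame_bgr)  # hypothetical GAN restoration stage
    results = detector(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    det = results.xyxy[0]  # (N, 6) tensor: x1, y1, x2, y2, conf, cls
    if len(det) == 0:
        return None
    x1, y1, x2, y2, conf, cls = det[det[:, 4].argmax()].tolist()
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0
```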
Meet the WALL-E lookalike robot that disinfects surfaces, opens doors and delivers pills to patients
WALL-E may have roamed the earth alone 800 years in the future, but now a lookalike robot could be coming to the UK as early as 2023. And rather than just pick up litter like the Disney creation, this one is all-action. Aeolus Robotics claim their android, named 'Aeo', can act as a security guard, a hospital cleaner, and even take over the job of staff in care homes. It can use its pincher arm to open doors, operate lifts, or close windows.
- Asia (0.52)
- Europe > United Kingdom (0.38)
Technique Improves AI Ability to Understand 3D Space Using 2D Images
The work would help the artificial intelligence used in autonomous vehicles navigate in relation to other vehicles, using the two-dimensional images it receives from an onboard camera. A technique developed by researchers at North Carolina State University (NC State) uses two-dimensional (2D) images to improve the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects. Called MonoCon, the technique could improve the navigation of autonomous vehicles in relation to other vehicles using 2D images from onboard cameras, which are less expensive than LiDAR sensors. MonoCon can put 3D objects identified in 2D images into a "bounding box," which indicates to the AI the outermost edges of the objects. Said NC State's Tianfu Wu, "In addition to asking the AI to predict the camera-to-object distance and the dimensions of the bounding boxes, we also ask the AI to predict the locations of each of the box's eight points and its distance from the center of the bounding box in two dimensions," which "helps the AI more accurately identify and predict 3D objects based on 2D images."
- North America > United States > North Carolina (0.29)
- North America > United States > District of Columbia > Washington (0.09)
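A hedged sketch of a MonoCon-style auxiliary head in PyTorch, loosely following Wu's description above; this is not the NC State authors' code, and the feature dimension is an arbitrary choice:

```python
import torch.nn as nn

class MonoConStyleHead(nn.Module):
    """From per-object features, regress the 3D box dimensions, the
    camera-to-object distance, and the 2D offsets of the eight
    projected box corners from the box center, which serve as the
    auxiliary supervision described in the article."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.dims = nn.Linear(feat_dim, 3)      # width, height, length
        self.depth = nn.Linear(feat_dim, 1)     # camera-to-object distance
        self.corners = nn.Linear(feat_dim, 16)  # (dx, dy) for 8 corners

    def forward(self, feats):  # feats: (N, feat_dim)
        return (self.dims(feats),
                self.depth(feats),
                self.corners(feats).view(-1, 8, 2))
```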
Pico4ML Brings Machine Learning To the Raspberry Pi Pico
The Raspberry Pi Pico wouldn't be the first board that comes to mind for machine learning, but it seems that the $4 board may be a viable platform for machine learning projects. The Pico4ML from Arducam is an RP2040-based board with an onboard camera, screen, and microphone that looks to be the same size as the Raspberry Pi Pico. Arducam is probably better known for its range of cameras for the Raspberry Pi and Nvidia Jetson boards, but since the release of the Raspberry Pi Pico it has been tinkering with machine learning projects powered by the Pico. The Arducam Pico4ML is its first RP2040-based board and the first board to feature an onboard camera, a microphone that you can use for "wake word" detection, a screen, and an Inertial Measurement Unit (IMU) that can detect gestures. The Pico4ML is intended for machine learning and artificial intelligence projects based around Tiny Machine Learning (TinyML).
How to keep drones flying when a motor fails
Robotics researchers at the University of Zurich show how onboard cameras can be used to keep damaged quadcopters in the air and flying stably, even without GPS. As anxious passengers are often reassured, commercial aircraft can easily continue to fly even if one of the engines stops working. But for drones with four propellers, also known as quadcopters, the failure of one motor is a bigger problem. With only three rotors working, the drone loses stability and inevitably crashes unless an emergency control strategy takes over. Researchers at the University of Zurich and the Delft University of Technology have now found a solution to this problem: they show that information from onboard cameras can be used to stabilize the drone and keep it flying autonomously after one rotor suddenly gives out.
A Mosquito Pick-and-Place System for PfSPZ-based Malaria Vaccine Production
Phalen, Henry, Vagdargi, Prasad, Schrum, Mariah L., Chakravarty, Sumana, Canezin, Amanda, Pozin, Michael, Coemert, Suat, Iordachita, Iulian, Hoffman, Stephen L., Chirikjian, Gregory S., Taylor, Russell H.
The treatment of malaria is a global health challenge that stands to benefit from the widespread introduction of a vaccine for the disease. A method has been developed to create a live-organism vaccine using the sporozoites (SPZ) of the parasite Plasmodium falciparum (Pf), which are concentrated in the salivary glands of infected mosquitoes. Current manual dissection methods to obtain these PfSPZ are not optimally efficient for large-scale vaccine production. We propose an improved dissection procedure and a mechanical fixture that increases the rate of mosquito dissection and helps to deskill this stage of the production process. We further demonstrate the automation of a key step in this production process: the picking and placing of mosquitoes from a staging apparatus into a dissection assembly. This unit test of a robotic mosquito pick-and-place system is performed using a custom-designed micro-gripper attached to a four-degree-of-freedom (4-DOF) robot under the guidance of a computer vision system. Mosquitoes are autonomously grasped and pulled to a pair of notched dissection blades to remove the head of the mosquito, allowing access to the salivary glands. Placement into these blades is adapted based on output from computer vision to accommodate the unique anatomy and orientation of each grasped mosquito. In this pilot test of the system on 50 mosquitoes, we demonstrate 100% grasping accuracy and 90% accuracy in placing the mosquito with its neck within the blade notches such that the head can be removed. This is a promising result for this difficult and non-standard pick-and-place task.
- North America > United States > Maryland > Baltimore (0.14)
- Asia > Singapore (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- (9 more...)
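To illustrate how placement might be "adapted based on output from computer vision," here is a hypothetical helper that shifts the commanded blade target by the vision-measured neck offset; the function name, the pixel-to-millimeter calibration, and the geometry are all assumptions, not the authors' implementation:

```python
import numpy as np

def plan_placement(grasp_px, neck_px, px_per_mm, blade_xy_mm):
    """Given pixel coordinates of the gripper tip (grasp_px) and the
    mosquito's neck (neck_px) from the vision system, shift the
    nominal notch center (blade_xy_mm) so the neck, rather than the
    grasp point, lands between the dissection blades."""
    offset_mm = (np.asarray(neck_px) - np.asarray(grasp_px)) / px_per_mm
    yaw = np.arctan2(offset_mm[1], offset_mm[0])   # body-axis angle for alignment
    target_xy = np.asarray(blade_xy_mm) - offset_mm
    return target_xy, yaw
```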